
Moderators are instructed to remove any post praising, supporting or representing any listed figure.
Anton Shekhovtsov, an expert in far-right groups, said he was “confused about the methodology.” The company bans an impressive array of American and British groups, he said, but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine.
Countries where Facebook faces government pressure seem to be better covered than those where it does not. Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.
The list includes a growing number of groups with one foot in the political mainstream, like the far-right Golden Dawn, which holds seats in the Greek and European Union parliaments.
For a tech company to draw these lines is “extremely problematic,” said Jonas Kaiser, a Harvard University expert on online extremism. “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”
The bans are a kind of shortcut, said Sana Jaffrey, who studies Indonesian politics at the University of Chicago. Asking moderators to look for a banned name or logo is easier than asking them to make judgment calls about when political views are dangerous.
But that means that in much of Asia and the Middle East, Facebook bans hard-line religious groups that represent significant segments of society. Blanket prohibitions, Ms. Jaffrey said, amount to Facebook shutting down one side in national debates.
And its decisions often skew in favor of governments, which can fine or regulate Facebook.
In Sri Lanka, Facebook removed posts commemorating members of the Tamil minority who died in the country’s civil war. The company bans any positive mention of Tamil rebels, though users can praise government forces that were also guilty of atrocities.
Kate Cronin-Furman, a Sri Lanka expert at University College London, said this prevented Tamils from memorializing the war, allowing the government to impose its version of events — entrenching Tamils’ second-class status.
The View From Menlo Park
Facebook’s policies might emerge from well-appointed conference rooms, but they are executed largely by moderators in drab outsourcing offices in distant locations like Morocco and the Philippines.
Facebook says moderators are given ample time to review posts and don’t have quotas. Moderators say they face pressure to review about a thousand pieces of content per day. They have eight to 10 seconds for each post, longer for videos.
The moderators describe feeling in over their heads. For some, pay is tied to speed and accuracy. Many last only a few exhausting months. Front-line moderators have few mechanisms for alerting Facebook to new threats or holes in the rules — and little incentive to try, one said.
One moderator described an officewide rule to approve any post if no one on hand can read the appropriate language. This may have contributed to violence in Sri Lanka and Myanmar, where posts encouraging ethnic cleansing were routinely allowed to stay up.
Facebook says that any such practice would violate its rules, which include contingencies for reviewing posts in unfamiliar languages. Justin Osofsky, a Facebook vice president who oversees these contracts, said any corner-cutting probably came from midlevel managers at outside companies acting on their own.
This hints at a deeper problem. Facebook has little visibility into the giant outsourcing companies, which largely police themselves, and has at times struggled to control them. And because Facebook relies on the companies to support its expansion, its leverage over them is limited.
One hurdle to reining in inflammatory speech on Facebook may be Facebook itself. The platform relies on an algorithm that tends to promote the most provocative content, sometimes of the sort the company says it wants to suppress.
Facebook could blunt that algorithm or slow its expansion into new markets, where it has proved most disruptive. But the social network instills in employees an almost unquestioned faith in their product as a force for good.
When Ms. Su, the News Feed engineer, was asked whether she believed research finding that more Facebook usage correlates with more violence, she replied, “I don’t think so.”
“As we have greater reach, as we have more people engaging, that raises the stakes,” she said. “But I also think that there’s greater opportunity for people to be exposed to new ideas.”
Still, even some executives hesitate when asked whether the company has found the right formula.
Richard Allan, a London-based vice president who is also a sitting member of the House of Lords, said a better model might be “some partnership arrangement” with “government involved in setting the standards,” even if not all governments can be trusted with this power.
Mr. Fishman, the Facebook terrorism expert, said the company should consider deferring more decisions to moderators, who may better understand the nuances of local culture and politics.
But at company headquarters, the most fundamental questions of all remain unanswered: What sorts of content lead directly to violence? When does the platform exacerbate social tensions?
Rosa Birch, who leads an internal crisis team, said she and her colleagues had been posing these questions for years. They are making progress, she said, but will probably never have definitive answers.
But without a full understanding of the platform’s impact, most policies are just ad hoc responses to problems as they emerge. Employees make a tweak, wait to see what happens, then tweak again — as if repairing an airplane midflight.
In the meantime, the company continues to expand its reach to more users in more countries.
“One of the reasons why it’s hard to talk about,” Mr. Fishman said, “is because there is a lack of societal agreement on where this sort of authority should lie.”
But, he said, “it’s harder to figure out what a better alternative is.”